Now that I had gotten my ELK server up and running, it was time to ingest and interpret logs from my Ubuntu server. I decided to start by learning the layout of the Kibana UI and figuring out what each area of the screen is used for. Here are my initial findings:
Once I was more familiar with my environment, I wanted to create my first detection scenario. I thought of doing a simple brute-force SSH login, but first I wanted to see whether my server had any vulnerability that could be easily exploited. I booted up Nessus and ran a scan, but after an hour of exploring each of the big vulnerabilities there wasn't anything I could use as a novice hacker. So brute-force SSH it is.
I started by just seeing what a failed SSH login attempt looks like in Kibana. I quickly figured out that if you filter by event.action: "ssh_login" and event.outcome: "failure" you can see all the failed SSH logins, and once you find the user who is failing the login you can narrow things down further with user.name: "name".
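Put together, the filter I ended up typing into the Discover search bar looks something like this (KQL syntax; the username is a placeholder for whichever account is failing):

```kql
event.action: "ssh_login" and event.outcome: "failure" and user.name: "name"
```

Each clause just narrows the document set further, so you can drop the user.name clause to see failures across every account.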
I also tried simulating a privilege escalation attack by logging in to the server and running "ls", first as a normal user and then with sudo. Running "ls" didn't generate any log I could find, but executing "sudo su" did get reported under the "process.name" field.
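A filter along these lines surfaces those escalation events (assuming, as in my setup, that the shipper records the elevated process under process.name; the exact value you see may be "sudo" or "su" depending on how the event is captured):

```kql
process.name: "sudo" or process.name: "su"
```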
Now that I understood which fields reflect certain attacks, I decided to create an alert that fires when certain fields reflect a potential SSH brute-force attack. In Kibana, an alert is set by defining a set of fields that, if present in x documents over a span of y minutes, will send a notification to administrators. For a real brute-force attack, the alert could be set to go off after 100+ failed-login documents were reported in five minutes or less. However, because I am brute forcing this by hand, I set the document threshold to just 5. You also get to choose the number of matching documents sent along with the report, which I set to 10; you can adjust this depending on the verbosity you'd prefer.
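Boiled down, the rule amounts to a query plus a handful of thresholds. Sketched out descriptively (these labels mirror the Kibana UI settings, not the exact API schema):

```
query:     event.action: "ssh_login" and event.outcome: "failure"
threshold: 5 matching documents
window:    5 minutes
attach:    10 matching documents per notification
```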
With the alert set up and enabled, I went back to the command line and failed some SSH login attempts. Sure enough, after I had failed multiple logins, the alerts page showed an active alert, and investigating it provided the source IP address the attempts were coming from.
I acted as though I was really investigating this alert: I plugged the IP address into the field search and found the documents relating to the failed SSH attempts I had created. I could see every attempt that was made, from the multiple failures to the one success.
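With the source address in hand, the pivot is just another filter; assuming the attempts are recorded under the ECS source.ip field, it looks something like this (the IP is a stand-in for whatever the alert reported):

```kql
source.ip: "192.168.1.25" and event.action: "ssh_login"
```

Leaving event.outcome out of this query is what lets you see the full timeline, failures and the eventual success together.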
In a real SOC environment I would escalate the incident according to the organization's incident response procedures, but in the lab this is where we call it a day. Next I want to try using a command-line tool to actually brute force an SSH login and set an alert based on that, or maybe try a new attack vector altogether.